Session E-4

Optimization

Conference
8:30 AM — 10:00 AM EDT
Local
May 18 Thu, 8:30 AM — 10:00 AM EDT
Location
Babbio 219

Robustified Learning for Online Optimization with Memory Costs

Pengfei Li (UC Riverside, USA); Jianyi Yang and Shaolei Ren (University of California, Riverside, USA)

Online optimization with memory costs has many real-world applications, where sequential actions are made without knowing the future input. Nonetheless, the memory cost couples the actions over time, adding substantial challenges. Conventionally, this problem has been approached by various expert-designed online algorithms with the goal of achieving bounded worst-case competitive ratios, but the resulting average performance is often unsatisfactory. On the other hand, emerging machine learning (ML) based optimizers can improve the average performance, but suffer from the lack of worst-case performance robustness. In this paper, we propose a novel expert-robustified learning (ERL) approach, achieving both good average performance and robustness. More concretely, for robustness, ERL introduces a novel projection operator that robustifies ML actions by utilizing an expert online algorithm; for average performance, ERL trains the ML optimizer based on a recurrent architecture by explicitly considering the downstream expert robustification process. We prove that, for any \(\lambda\geq1\), ERL is \(\lambda\)-competitive against the expert algorithm and \(\lambda\cdot C\)-competitive against the optimal offline algorithm (where \(C\) is the expert's competitive ratio). Additionally, we extend our analysis to a novel setting of multi-step memory costs. Finally, our analysis is supported by empirical experiments for an energy scheduling application.
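A minimal sketch of the expert-robustification step described in the abstract is given below. The \(\lambda\)-budget rule follows the stated high-level idea; the cost model, the bisection-based projection, and all names are illustrative assumptions rather than the authors' implementation.

    def robustify(x_ml, x_exp, cum_rob, cum_exp, step_cost, lam):
        # step_cost(x): hitting cost plus memory (switching) cost of action x
        # at the current step, given the previous robustified action.
        c_exp = step_cost(x_exp)
        if cum_rob + step_cost(x_ml) <= lam * (cum_exp + c_exp):
            return x_ml  # the ML action already satisfies the lambda-budget
        # Otherwise move the ML action toward the expert action until the
        # budget holds (bisection over the convex combination; assumes a
        # convex step cost so the feasible set of t is an interval).
        lo, hi = 0.0, 1.0
        for _ in range(30):
            t = 0.5 * (lo + hi)
            x = (1.0 - t) * x_exp + t * x_ml
            if cum_rob + step_cost(x) <= lam * (cum_exp + c_exp):
                lo = t
            else:
                hi = t
        return (1.0 - lo) * x_exp + lo * x_ml
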
Speaker Pengfei Li (University of California, Riverside)

Pengfei Li is a third-year CS Ph.D. student at the University of California, Riverside, under the supervision of Prof. Shaolei Ren. He obtained an M.S.E. degree from the Laboratory for Computational Sensing and Robotics (LCSR) at Johns Hopkins University, under the supervision of Prof. Alan Yuille and Prof. Gregory Hager. Pengfei's research focuses on online optimization, scheduling, machine learning, and applications in sustainable AI. His recent work on the water footprint of AI has been widely reported by major media outlets (e.g., Barron's, Forbes, Fox News, and CNN News 18).


Online Distributed Optimization with Efficient Communication via Temporal Similarity

Juncheng Wang and Ben Liang (University of Toronto, Canada); Min Dong (Ontario Tech University, Canada); Gary Boudreau and Ali Afana (Ericsson, Canada)

We consider online distributed optimization in a networked system, where multiple devices assisted by a server collaboratively minimize the accumulation of a sequence of global loss functions that can vary over time. To reduce the amount of communication, the devices send quantized and compressed local decisions to the server, resulting in noisy global decisions. Therefore, there exists a tradeoff between the optimization performance and the communication overhead. Existing works separately optimize computation and communication. In contrast, we jointly consider computation and communication over time, by encouraging temporal similarity in the decision sequence to control the communication overhead. We propose an efficient algorithm, termed Online Distributed Optimization with Temporal Similarity (ODOTS), where the local decisions are both computation- and communication-aware. Furthermore, ODOTS uses a novel tunable virtual queue, which completely removes the commonly assumed Slater's condition through a modified Lyapunov drift analysis. ODOTS delivers provable performance bounds on both the optimization objective and constraint violation. As an example application, we apply ODOTS to enable communication-efficient federated learning. Our experimental results based on real-world image classification demonstrate that ODOTS obtains higher classification accuracy and lower communication overhead compared with the current best alternatives for both convex and non-convex loss functions.
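As a rough illustration of the computation- and communication-aware local step sketched above, the snippet below quantizes each local decision before transmission and drives a tunable virtual queue with the constraint value. The quantizer, queue recursion, and parameter names are assumptions, not the paper's exact update.

    import numpy as np

    def quantize(x, step):
        # Uniform stochastic quantizer: an illustrative stand-in for the
        # compression applied before sending local decisions to the server.
        low = np.floor(x / step) * step
        p = (x - low) / step
        return low + step * (np.random.rand(*np.shape(x)) < p)

    def device_step(x_prev, grad_f, q, g, grad_g, alpha, gamma, step):
        # One local update: descend on the loss gradient plus the
        # queue-weighted constraint gradient, then update the virtual queue.
        x = x_prev - alpha * (grad_f(x_prev) + q * grad_g(x_prev))
        q = max(0.0, gamma * q + g(x))   # tunable virtual queue recursion
        return quantize(x, step), q
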
Speaker Ben Liang

Ben Liang received honors-simultaneous B.Sc. (valedictorian) and M.Sc. degrees in Electrical Engineering from Polytechnic University (now the engineering school of New York University) in 1997 and the Ph.D. degree in Electrical Engineering with a minor in Computer Science from Cornell University in 2001. He was a visiting lecturer and post-doctoral research associate at Cornell University in the 2001 - 2002 academic year. He joined the Department of Electrical and Computer Engineering at the University of Toronto in 2002, where he is now Professor and L. Lau Chair in Electrical and Computer Engineering. His current research interests are in networked systems and mobile communications. He is an associate editor for the IEEE Transactions on Mobile Computing and has served on the editorial boards of the IEEE Transactions on Communications, the IEEE Transactions on Wireless Communications, and the Wiley Security and Communication Networks. He regularly serves on the organizational and technical committees of a number of conferences. He is a Fellow of IEEE and a member of ACM and Tau Beta Pi.


A Bayesian Framework for Online Nonconvex Optimization over Distributed Processing Networks

Zai Shi (The Ohio State University, USA); Yilin Zheng (The Ohio State University, USA); Atilla Eryilmaz (The Ohio State University, USA)

In many applications such as machine learning, reinforcement learning, and optimization for data centers, the increasing data size and model complexity have made it impractical to run optimization on a single machine. Therefore, solving the distributed optimization problem has become an important task. In this work, we consider a distributed processing network \(G=(\mathcal{V},\mathcal{E})\) with \(n\) nodes, where each node \(i\) can only evaluate the values of a local function and can only communicate with its neighbors. The objective is to reach consensus on the global optimizer of \(\max_{x\in\mathcal{X}} \frac{1}{n}\sum_{i=1}^n f_i(x)\). Previous methods either assume gradient information, which is not suitable for model-free learning, or consider zeroth-order information but assume convexity of the objective functions and can only guarantee convergence to a stationary point for non-convex objectives. To address these limitations, we drop both the known-gradient assumption and the convexity assumption. Instead, we propose a distributed Bayesian framework for the problem with only zeroth-order information and nonconvex objective functions in a Matérn reproducing kernel Hilbert space. Under this framework, we propose an algorithm and show that with high probability it reaches consensus and has sublinear regret with respect to the global optimum. The results are validated through numerical studies.
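A toy sketch of one round of the zeroth-order Bayesian approach described above: each node fits a Gaussian-process surrogate (Matérn kernel) to its own function evaluations, scores a shared candidate grid with an upper confidence bound, and the nodes average their scores with neighbors to pick a consensus query point. The UCB rule, the gossip matrix W, and all names are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def local_ucb(X_i, y_i, candidates, beta=2.0):
        # X_i: (n_samples, n_dims) points evaluated by node i; y_i: observed
        # zeroth-order values of f_i.  Score candidates optimistically.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.asarray(X_i), np.asarray(y_i))
        mu, sigma = gp.predict(np.asarray(candidates), return_std=True)
        return mu + beta * sigma

    def consensus_query(scores_by_node, W, candidates):
        # One gossip round with a doubly stochastic weight matrix W, then
        # pick the candidate maximizing the averaged (consensus) score.
        avg = (W @ np.vstack(scores_by_node)).mean(axis=0)
        return candidates[int(np.argmax(avg))]
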
Speaker Yilin Zheng (The Ohio State University)

Yilin Zheng is a graduate student in the ECE department at The Ohio State University, under the supervision of Professor Atilla Eryilmaz. His research mainly focuses on online learning and optimization.


Constrained Bandit Learning with Switching Costs for Wireless Networks

Juaren Steiger (Queen's University, Canada); Bin Li (The Pennsylvania State University, USA); Bo Ji (Virginia Tech, USA); Ning Lu (Queen's University, Canada)

Bandits with arm selection constraints and bandits with switching costs have both gained recent attention in wireless networking research. Pessimistic-optimistic algorithms, which combine bandit learning with virtual queues to track the constraints, are commonly employed in the former. Block-based algorithms, where switching is disallowed within a block, are commonly employed in the latter. While efficient algorithms have been developed for both problems, it remains challenging to guarantee low regret and constraint violation in a bandit problem that includes both arm selection constraints and switching costs due to the tight coupling between the two. Here, switching may be necessary to decrease the constraint violation but comes at the cost of increased switching regret. In this paper, we tackle the constrained bandits with switching costs problem, for which we design a block-based pessimistic-optimistic algorithm. We identify three timely wireless networking applications for this framework in edge computing, mobile crowdsensing, and wireless network selection. We also prove that our algorithm achieves sublinear regret and vanishing constraint violation and corroborate these results with synthetic simulations and extensive trace-based simulations in the wireless network selection setting.
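A compact sketch of the block-based pessimistic-optimistic idea described above: within a block the chosen arm is never switched, the arm score is optimistic (UCB) in the reward and pessimistic via a virtual queue on the constraint cost, and the queue is updated after each block. The exact bonus, block length, and queue recursion below are illustrative assumptions.

    import numpy as np

    def choose_arm(mu_hat, n_pulls, c_hat, q, t, V=10.0):
        # Optimistic in the reward estimate, pessimistic on the constraint
        # cost via the virtual queue weight q (illustrative scoring rule).
        bonus = np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(n_pulls, 1))
        return int(np.argmax(V * (mu_hat + bonus) - q * c_hat))

    def play_block(arm, block_len, pull, q, budget):
        # Play the same arm for the whole block (no switching inside),
        # then update the virtual queue with the block's constraint slack.
        rewards, costs = zip(*(pull(arm) for _ in range(block_len)))
        q = max(0.0, q + sum(costs) - block_len * budget)
        return q, list(rewards)
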
Speaker Juaren Steiger (Queen's University)

Juaren Steiger is a PhD student at Queen's University in Canada who is studying online learning and its applications to wireless communications.


Session Chair

Ruozhou Yu

Session E-5

Wireless/Mobile Security 1

Conference
1:30 PM — 3:00 PM EDT
Local
May 18 Thu, 1:30 PM — 3:00 PM EDT
Location
Babbio 219

Secure and Robust Two Factor Authentication via Acoustic Fingerprinting

Yanzhi Ren, Tingyuan Yang and Zhiliang Xia (University of Electronic Science and Technology of China, China); Hongbo Liu (University of Electronic Science and Technology of China, China); Yingying Chen (Rutgers University, USA); Nan Jiang and Zhaohui Yuan (East China Jiaotong University, China); Hongwei Li (University of Electronic Science and Technology of China, China)

Two-factor authentication (2FA) has become pervasive as mobile devices become prevalent. In this work, we propose a secure 2FA that utilizes the individual acoustic fingerprint of the speaker/microphone on the enrolled device as the second proof. The main idea behind our system is to use both magnitude and phase fingerprints derived from the frequency response of the enrolled device, obtained by emitting acoustic beep signals alternately from the enrolled and login devices and receiving their direct arrivals, for 2FA. Given the input microphone samplings, our system designs an arrival time detection scheme to accurately identify the beginning point of the beep signal in the received signal. To achieve robust authentication, we develop a new distance mitigation scheme to eliminate the impact of transmission distance from the sound propagation model, extracting stable fingerprints in both the magnitude and phase domains. Our device authentication component then calculates a weighted correlation value between the device profile and the fingerprints extracted from run-time measurements to conduct the device authentication for 2FA. Our experimental results show that the proposed system is accurate and robust to both random impersonation and Man-in-the-Middle (MiM) attacks across different scenarios and device models.
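The authentication decision described above boils down to a weighted correlation between the enrolled device profile and the run-time fingerprints. The snippet below is a hedged sketch of that comparison; the weighting scheme, the equal magnitude/phase combination, and the 0.9 threshold are assumptions, not the paper's parameters.

    import numpy as np

    def weighted_corr(profile, runtime, w):
        # Weighted Pearson correlation between the enrolled fingerprint and
        # the fingerprint extracted from run-time measurements.
        pm = profile - np.average(profile, weights=w)
        rm = runtime - np.average(runtime, weights=w)
        den = np.sqrt(np.sum(w * pm ** 2) * np.sum(w * rm ** 2))
        return np.sum(w * pm * rm) / (den + 1e-12)

    def authenticate(mag_prof, mag_rt, ph_prof, ph_rt, w_mag, w_ph, thresh=0.9):
        # Combine magnitude- and phase-domain similarities and accept the
        # second factor only if the combined score exceeds the threshold.
        score = 0.5 * weighted_corr(mag_prof, mag_rt, w_mag) \
                + 0.5 * weighted_corr(ph_prof, ph_rt, w_ph)
        return bool(score >= thresh)
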
Speaker Yingying Chen (Rutgers University)

Yingying Chen is the Department Chair and Professor of Electrical and Computer Engineering and the Peter D. Cherasia Faculty Scholar at Rutgers University. She is the Associate Director of the Wireless Information Network Laboratory (WINLAB) and also leads the Data Analysis and Information Security Laboratory (DAISY). She is a National Academy of Inventors (NAI) Fellow, an Institute of Electrical and Electronics Engineers (IEEE) Fellow, and an Asia-Pacific Artificial Intelligence Association (AAIA) Fellow, and she has been named an ACM Distinguished Scientist. Her background is a combination of Computer Science, Computer Engineering, and Physics. She has co-authored three books, Securing Emerging Wireless Systems (Springer, 2009), Pervasive Wireless Environments: Detecting and Localizing User Spoofing (Springer, 2014), and Sensing Vehicle Conditions for Detecting Driving Behaviors (Springer, 2018), and has published 240+ journal articles and refereed conference papers. She has also obtained many patents, several of which have been licensed and commercialized by industry.


Secur-Fi: A Secure Wireless Sensing System Based on Commercial Wi-Fi Devices

Xuanqi Meng, Jiarun Zhou, Xiulong Liu, Xinyu Tong, Wenyu Qu and Jianrong Wang (Tianjin University, China)

Wi-Fi sensing technology plays an important role in numerous IoT applications such as virtual reality, smart homes, and elder healthcare. The basic principle is to extract physical features from Wi-Fi signals to depict the user's locations or behaviors. However, current research focuses on improving sensing accuracy while neglecting security concerns. Specifically, current Wi-Fi routers usually transmit a strong signal, so that we can access the Internet even through walls. Accordingly, outdoor adversaries are able to eavesdrop on this strong Wi-Fi signal and infer the behavior of indoor users in a non-intrusive way, while the indoor users remain unaware of the eavesdropping. To prevent outside eavesdropping, we propose Secur-Fi, a secure Wi-Fi sensing system. Our system meets the following two requirements: (1) we can generate fraud signals to block outside unauthorized Wi-Fi sensing; (2) we can recover the signal and enable authorized Wi-Fi sensing. We implement the proposed system on commercial Wi-Fi devices and conduct experiments in three applications: passive tracking, behavior recognition, and breath detection. The experimental results show that our approaches can reduce the accuracy of unauthorized sensing by 130% (passive tracking), 72% (behavior recognition), and 86% (breath detection).
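One plausible way to realize the "fraud signal for outsiders, recoverable signal for insiders" behavior described above is to distort the observed channel with a keyed pseudo-random mask that only authorized receivers can invert. The construction below is purely illustrative under that assumption and is not the paper's actual mechanism.

    import numpy as np

    def keyed_mask(key, t, shape):
        # Keyed pseudo-random complex gain per subcarrier; `key` is a shared
        # integer secret and `t` a frame index (illustrative construction).
        rng = np.random.default_rng([key, t])
        amp = rng.uniform(0.5, 2.0, shape)
        phase = rng.uniform(0.0, 2.0 * np.pi, shape)
        return amp * np.exp(1j * phase)

    def obfuscate_csi(csi, key, t):
        # Unauthorized sensing observes csi * mask, i.e., a fraudulent channel.
        return csi * keyed_mask(key, t, csi.shape)

    def recover_csi(csi_obf, key, t):
        # An authorized receiver regenerates the mask and divides it out,
        # restoring the true channel for legitimate Wi-Fi sensing.
        return csi_obf / keyed_mask(key, t, csi_obf.shape)
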
Speaker Xuanqi Meng (Tianjin University)



I Can Hear You Without a Microphone: Live Speech Eavesdropping From Earphone Motion Sensors

Yetong Cao and Fan Li (Beijing Institute of Technology, China); Huijie Chen (Beijing University of Technology, China); Xiaochen Liu and Chunhui Duan (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

Recent literature has advanced speech eavesdropping using motion sensors mounted on smartphones and AR/VR headsets, owing to their sensitivity to subtle vibrations. The popularity of motion sensors in earphones has fueled a rise in their sampling rate, which enables various enhanced features. This paper investigates a new threat of eavesdropping via motion sensors of earphones by developing EarSpy, which builds on our observation that the earphone's accelerometer can capture bone conduction vibrations (BCVs) and ear canal dynamic motions (ECDMs) associated with speaking; these enable EarSpy to derive unique information about the wearer's speech. Leveraging a study on the motion sensor measurements captured from earphones, EarSpy gains the ability to disentangle the wearer's live speech from interference caused by body motions and by vibrations generated when the earphone's speaker plays audio. To enable user-independent attacks, EarSpy involves novel efforts, including a trajectory instability reduction method to calibrate the waveform of ECDMs and a data augmentation method to enrich the diversity of BCVs. Moreover, EarSpy explores effective representations of BCVs and ECDMs, and develops a convolutional neural model with Connectionist Temporal Classification (CTC) to realize accurate speech recognition. Extensive experiments involving 14 participants demonstrate that EarSpy achieves promising recognition of the wearer's speech.
Speaker Yetong Cao

Yetong Cao is a PhD student in the Wireless and Mobile Computing Lab, supervised by Prof. Fan Li, in the School of Computer Science, Beijing Institute of Technology. She also works with Prof. Jun Luo in the School of Computer Science and Engineering, Nanyang Technological University. Her research fields include human physiological signal sensing and mobile/wearable computing.



HeartPrint: Passive Heart Sounds Authentication Exploiting In-Ear Microphones

Yetong Cao (Beijing Institute of Technology, China); Chao Cai (Huazhong University of Science and Technology, China); Fan Li (Beijing Institute of Technology, China); Zhe Chen (China-Singapore International Joint Research Institute, China); Jun Luo (Nanyang Technological University, Singapore)

Biometrics has been increasingly integrated into wearable devices to enhance security in recent years. Meanwhile, the popularity of wearables in turn creates a unique opportunity for capturing novel biometrics leveraging various embedded sensing modalities. In this paper, we study a new biometric combining the uniqueness of heart motion, bone conduction, and body asymmetry. Specifically, we design HeartPrint as a passive yet secure user authentication system: it exploits the bone-conducted heart sounds captured by (widely available) dual in-ear microphones (IEMs) to authenticate users, while neatly leveraging IEMs renders it transparent to users without impairing earphones' normal functions. To suppress the interference from other body sounds and audio produced by the earphones, we develop an interference elimination method using modified non-negative matrix factorization to separate heart sounds from background interference. We further explore the uniqueness of IEM-recorded heart sounds in three aspects to extract a biometric representation, based on which HeartPrint leverages a convolutional neural model equipped with a continual learning method to achieve accurate authentication under drifting body conditions. Extensive experiments with 18 pairs of commercial earphones on 45 participants confirm that HeartPrint can achieve 1.6% FAR and 1.8% FRR, while effectively coping with major attacks, complicated interference, and hardware diversity.
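Below is a rough sketch of NMF-based interference suppression in the spirit described above: factor the magnitude spectrogram of the in-ear recording and keep the components whose spectral energy sits lowest, where bone-conducted heart sounds concentrate. The component counts, the spectral-centroid selection rule, and the use of plain (unmodified) NMF are assumptions.

    import numpy as np
    from scipy.signal import stft, istft
    from sklearn.decomposition import NMF

    def separate_heart_sounds(x, fs, n_heart=8, n_noise=24):
        # Factor the magnitude spectrogram into nonnegative bases/activations.
        f, t, Z = stft(x, fs, nperseg=1024)
        mag, phase = np.abs(Z), np.angle(Z)
        model = NMF(n_components=n_heart + n_noise, init='nndsvda', max_iter=400)
        W = model.fit_transform(mag)            # spectral bases (freq x comp)
        H = model.components_                   # activations (comp x time)
        # Keep the components with the lowest spectral centroid: heart sounds
        # are dominated by low-frequency bone-conducted energy.
        centroid = (f[:, None] * W).sum(axis=0) / (W.sum(axis=0) + 1e-9)
        keep = np.argsort(centroid)[:n_heart]
        mag_heart = W[:, keep] @ H[keep]
        _, x_heart = istft(mag_heart * np.exp(1j * phase), fs, nperseg=1024)
        return x_heart
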
Speaker Yetong Cao

Yetong Cao is a PhD student in the Wireless and Mobile Computing Lab, supervised by Prof. Fan Li, in the School of Computer Science, Beijing Institute of Technology. She also works with Prof. Jun Luo in the School of Computer Science and Engineering, Nanyang Technological University. Her research fields include human physiological signal sensing and mobile/wearable computing.



Session Chair

Tamer Nadeem

Session E-6

Wireless/Mobile Security 2

Conference
3:30 PM — 5:00 PM EDT
Local
May 18 Thu, 3:30 PM — 5:00 PM EDT
Location
Babbio 219

Secure Device Trust Bootstrapping Against Collaborative Signal Modification Attacks

Xiaochan Xue, Shucheng Yu and Min Song (Stevens Institute of Technology, USA)

Bootstrapping security among wireless devices without prior-shared secrets is frequently demanded in emerging wireless and mobile applications. One promising approach for this problem is to utilize in-band physical-layer radio-frequency (RF) signals for authenticated key establishment because of its efficiency and high usability. However, existing in-band authenticated key agreement (AKA) protocols are mostly vulnerable to Man-in-the-Middle (MitM) attacks, which can be launched by modifying the transmitted wireless signals over the air. By annihilating legitimate signals and injecting malicious signals, signal modification attackers are able to completely control the communication channels and spoof victim wireless devices. State-of-the-art (SOTA) techniques addressing such attacks require additional auxiliary hardware or are limited to single attackers. This paper proposes a novel in-band security bootstrapping technique that can thwart colluding signal modification attackers. Different from SOTA solutions, our design is compatible with commodity devices without requiring additional hardware. We achieve this based on internal randomness of each device that is unpredictable to attackers. Any modification to RF signals will be detected with high probability. Extensive security analysis and experimentation on the USRP platform demonstrate the effectiveness of our design under various attack strategies.
Speaker Xiaochan Xue (Stevens Institute of Technology)

Xiaochan Xue is a third-year Ph.D. student in the Department of Electrical and Computer Engineering at Stevens Institute of Technology. She received a master's degree from Stevens Institute of Technology in 2020 and a B.S. degree from Jilin University, China, in 2017. Her research focuses on physical-layer wireless security, data privacy preservation for federated learning, and millimeter-wave networks.


Voice Liveness Detection with Sound Field Dynamics

Qiang Yang (The Hong Kong Polytechnic University, Hong Kong); Kaiyan Cui (The Hong Kong Polytechnic University, China); Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong)

Voice assistants are widely integrated into a variety of smart devices, enabling users to easily complete daily tasks and even critical operations like online transactions with voice commands. Thus, once attackers replay an unauthorized voice command through loudspeakers to compromise users' voice assistants, serious consequences such as information leakage and property loss can follow. Unfortunately, most existing voice liveness detection approaches rely on detecting lip motions or subtle physiological features in speech, which are only effective within a very short range. In this paper, we propose VoShield to check whether a voice command is from a real user or a loudspeaker imposter. VoShield measures sound field dynamics, a feature that changes quickly as human mouths dynamically open and close. In contrast, it remains rather stable for loudspeakers due to their fixed size. This feature enables VoShield to extend the working distance and remain resilient to varying user locations. To evaluate VoShield, we conducted comprehensive experiments with various settings in different working scenarios. The results show that VoShield can achieve a detection accuracy of 98.2% and an Equal Error Rate of 2.0%, which serves as a promising complement to current voice authentication systems for smart devices.
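As a toy illustration of the sound-field-dynamics feature described above, one could track how the spatial distribution of energy across a device's microphones changes from frame to frame and flag a recording as live when that variation is large. The specific feature, the microphone-array input format, and the threshold are assumptions for illustration only.

    import numpy as np

    def sound_field_dynamics(frame_energy):
        # frame_energy: array of shape (n_frames, n_mics) with the per-frame
        # energy observed at each microphone (illustrative input format).
        dist = frame_energy / (frame_energy.sum(axis=1, keepdims=True) + 1e-9)
        # Average frame-to-frame change of the spatial energy distribution.
        return float(np.mean(np.linalg.norm(np.diff(dist, axis=0), axis=1)))

    def is_live(frame_energy, thresh=0.05):
        # A rapidly varying sound field suggests a live talker whose mouth
        # opens and closes; a near-constant field suggests a loudspeaker.
        return sound_field_dynamics(frame_energy) >= thresh
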
Speaker Qiang Yang (The Hong Kong Polytechnic University)

Qiang Yang is a fresh Ph.D. graduate from The Hong Kong Polytechnic University. He has broad research interests in ubiquitous computing and acoustic sensing. His research has been published in well-recognized conferences and journals like UbiComp, INFOCOM, ICDCS, IPSN, and TMC. He is open to postdoc positions.


Expelliarmus: Command Cancellation Attacks on Smartphones using Electromagnetic Interference

Ming Gao (Zhejiang University, China); Fu Xiao (Nanjing University of Posts and Telecommunications, China); Weiran Liu, Wentao Guo, Yangtao Huang and Yajie Liu (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

Human-machine interactions (HMIs), e.g., touchscreens, are essential for users to interact with mobile devices. They are also beneficial in resisting emerging active attacks, which aim at maliciously controlling mobile devices such as smartphones and tablets. With touchscreen-like HMIs, users can promptly notice and interrupt malicious actions conducted by attackers and perform necessary countermeasures, e.g., tapping the ‘Quit' button on the touchscreen. However, the effect of HMI-oriented active attacks has not been investigated yet. In this paper, we present a practical attack against touchscreen-based devices, namely Expelliarmus. It reveals a new attack surface of active attacks for hijacking users' operations and thus taking full control of victim devices. Expelliarmus neutralizes users' touch commands by producing a reverse current via electromagnetic interference (EMI). Since the reverse current offsets the current change caused by a touch, the touchscreen detects no current change and thus ignores the user's command. Beyond this basic denial-of-service attack, we also realize a targeted cancellation attack, which can neutralize specific commands, e.g., ‘Quit', without interfering with irrelevant operations. Thus, the active attack can be carried out without interruption from users, even if they are alerted by abnormal events. Extensive evaluations demonstrate the effectiveness of Expelliarmus on 29 off-the-shelf devices.
Speaker Ming Gao (Zhejiang University)

Ming Gao is a Ph.D. student at the School of Cyber Science and Technology, Zhejiang University, under the supervision of Prof. Jinsong Han. His research interests lie in mobile computing, wireless sensing, and cyber-physical security.


Nowhere to Hide: Detecting Live Video Forgery via Vision-WiFi Silhouette Correspondence

Xinyue Fang, Jianwei Liu and Yike Chen (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China); Kui Ren and Gang Chen (Zhejiang University, China)

For safety assurance and crime prevention, video surveillance systems have been pervasively deployed in many security-critical scenarios, such as residences, retail stores, and banks. However, these systems can be infiltrated by an adversary and the video streams modified or replaced, i.e., subjected to a video forgery attack. The prevalence of Internet of Things (IoT) devices and the emergence of Deepfake-like techniques further highlight the vulnerability of video surveillance systems to such attacks. To secure existing surveillance systems, in this paper we propose a vision-WiFi cross-modal video forgery detection system, namely WiSil. Leveraging a theoretical model based on the principle of signal propagation, WiSil constructs wave front information of the object in the monitoring area from WiFi signals. With a well-designed deep learning network, WiSil further recovers silhouettes from the wave front information. Based on a Siamese network-based semantic feature extractor, WiSil can eventually determine whether a frame has been manipulated by comparing the semantic feature vectors extracted from the video's silhouette with those extracted from the WiFi silhouette. Extensive experiments show that WiSil achieves 95% accuracy in detecting tampered frames. Moreover, WiSil is robust against environment and person changes.
Speaker Xinyue Fang (Zhejiang University)

Xinyue Fang has been a doctoral student in Computer Science and Technology at Zhejiang University since September 2020, supervised by Prof. Jinsong Han.


Session Chair

Yanjun Pan

